#class 12 computer science python chapter 3
Text
Multibagger stock recommendation 🚀
Text
Top 246 Sonic Releases of 2020
001. Perfume Genius - Set My Heart On Fire Immediately 002. Splash Pattern - Sentinel 003. Pontiac Streator - Triz 004. gayphextwin & Pépe - gayphextwin / Pépe EP 005. Various Artists - SLINK Volume 1 006. Echium - Disruptions of Form 007. Henry Greenleaf - Caught 008. Sega Bodega - Reestablishing Connection 009. Summer Walker - Life on Earth 010. Charli XCX - how i'm feeling now 011. Various Artists - Physically Sick 3 012. Autechre - Sign 013. Off The Meds - Off The Meds 014. Brent Faiyaz - Fuck The World 015. Luis Pestana - Rosa Pano 016. Reinartz - Ravecoil 017. pent - - 018. Mark Leckey - In This Lingering Twilight Sparkle 019. Various Artists - Sharpen, Moving 020. Vanessa Worm - Vanessa 77 021. Aho Ssan - Simulacrum 022. Lyra Pramuk - Fountain 023. PJ Harvey - Dry Demos 024. Felicia Atkinson - Echo 025. Arca - KiCK i 026. Space Afrika - hybtwibt 027. Ambien Baby - Mindkiss 028. The Gasman - Voyage 029. Inigo Kennedy - Arcadian Falls 030. Raft of Trash - Likeness on the Edge of Town 031. OL - Wildlife Processing 032. Fyu-Jon - Furrow 033. Desire Marea - Desire 034. Octo Octa - Love Hypnosis Vol. 1 035. Phoebe Bridgers - Punisher 036. Jesse Osborne-Lanthier - Left My Brain @ Can Paixano (La Xampanyeria) OST 037. Various Artists - She's More Wild 038. Various Artists - Days Of Future Past [White Material] 039. Foul Play - Origins 040. Late Night Approach - The Naus Investigation 041. Amazondotcom & Siete Catorce - Vague Currency 042. Davis Galvin - Ntih / Icia 043. Patiño - Actually Laughing Out Loud 044. Various Artists - 2nd Anniversary Compilation [All Centre] 045. St-Antoine, Feu - L'eau Par La Soif 046. Xozgk - skllpt 047. Various Artists - The Sun is Setting on the World 048. DJ Python - Mas Amable 049. Peter Van Hoesen - Chapter for the Agnostic 050. Tracing Xircles - Air Lock 051. Ben & Jerry - Formant Fry 052. still house plants - Fast Edit 053. D-Leria - Still Standing 054. Florian T M Zeisig - Coatcheck 055. Hanne Lippard - Work 056. Shedbug & Rudolf C - Honey Mushrooms II 057. Carl Stone - Stolen Car 058. Ruth Anderson - Here 059. Sid Quirk - Ginnel Talk 060. Various Artists - Fluo I [Kindergarten Records] 061. Pump Media Unlimited - Change 062. VC-118A - Crunch / Plonk 063. Beatriz Ferreyra - Echos+ 064. Bearer - Precincts 065. PARSA - PAƬCHƜȜRKZ 1 066. Holly Childs & Gediminas Žygus - Hydrangea 067. Cosmin TRG - Remote 068. Obsequies - Carcass 069. Jake Muir - the hum of your veiled voice 070. No Moon - Set Phasers to Stun 071. Olli Aarni - Mustikoita ja kissankelloja 072. E-Unity - Duo Road EP 073. Benedek - Mr. Goods 074. Extinction Room - Extinction Stories 075. Hodge - Shadows In Blue 076. Various Artists - Tiny Planet Vol. 2 077. Floco Floco - On m'a dit 078. Breather - Ceremonies Of Aporia 079. Unknown Mobile - Leafy Edits Vol. 2 080. Wetman & Sword of Thorns - Apt E Vol 2 081. Borderlandstate_ the Best Kisser in L.A. - Hello Mainframe 082. Kiera Mulhern - De ossibus 20 083. Mads Kjeldgaard - Hold Time 084. Тпсб - Whities 031 EP 085. Network Glass - Twitch 086. a2a - A2A¹ EP 087. Wata Igarashi - Traveling 088. Joey G II - Pub Talk 089. Atom™ - <3 090. Valentina Magaletti & Marlene Ribeiro - Due Matte 091. Ewan Jansen - Island Diary 092. HOOVER1 - HOOVER1-4 093. Nazar - Guerrilla 094. Paradise Cinema - Paradise Cinema 095. Daisies - Daisies in the Studio with DJ Rap Class 096. Alloy Sea - Petrichor 097. Flørist - Intermedia 1 EP 098. Nandele - FF 099. Pro.tone - Zero Day Attack 100. Michael J. Blood - Introducing Michael J Blood 101. Various Artists - RV Trax, Vol. 
5 102. DJ Plead - Going for It EP 103. Strategy - The Babbling Brook 104. Various Artists - surf000 105. Deft - Burna 106. Various Artists - WorldWideWindow 107. Lucy Liyou - Welfare 108. O-Wells - Ebecs 109. Special Request - Spectral Frequency EP 110. Anunaku - Stargate 111. Scott Young - Ket City 112. Various Artists - Stir Crazy Vol.1 113. Syz - Bunzunkunzun 114. Oozy Zoo - Sabertooth 115. Vanessa Amara - Poses 116. Carl Finlow - Apparatus 117. Al Wootton - Snake Dance EP 118. Oi Les Ox - Crooner qui coule sous les clous 119. aircode - Effortless 120. Tristan Arp - Slip 121. Andrea - Ritorno 122. Russell Ellington Langston Butler - Emotional Bangers Only 123. The Lone Flanger - The Photon's Path 124. SHelley Parker & Peder Mannerfelt - Decouple ]( Series 125. Esplendor Geometrica - Cinética 126. Casey MQ - babycasey 127. Gacha Bakradze - Western Arrogance 128. Fatherhood - Big Boy 129. Blawan - Make A Goose 130. Roza Terenzi, Roza - Modern Bliss 131. AceMo - SYSTEM OVERRIDE 132. Meitei - Kofū 133. Penelope Trappes - Eel Drip 134. Adult Fantasies - Towers of Silence 135. Plush Managements Inc. - Magic Plush 136. Further Reductions - array 137. Ben Bondy & Exael - Aphelion Lash 138. Pugilist - Blue Planet EP 139. Dylan Henner - The Invention of the Human 140. Cindy - I'm Cindy 141. Ulla - Tumbling Towards a Wall 142. EMMA DJ - PZSÅRIASISZSZ TAPE 143. BufoBufo - Potholing 144. Model Home - Live 5-12-20 145. Low Budget Aliens - Junk DNA 146. Paranoid London - PLEDITS#2 147. Emra Grid - A System A Platform A Voiid 148. J. Albert - Pre Formal Audio 149. Dawl - Break It Down 150. Oall Hates - Tranceporter 151. Mystic Letter K - Cosmic Clearance [MLK4, 2020] 152. Coco Bryce - Lost City Archives Vol 2 153. Hagan - Waves 154. Various Artists - ON+ON+ON 155. INVT - EXTREMA 156. C Powers - Redirections Vol. 1 157. Significant Other - Club Aura 158. Client_03 - Thought disposal 159. Ghost Phone - LOCKDOWN BODY EDITS 160. Two Shell - N35 161. Rhyw - Loom High 162. EAMS - Demode 163. Various Artists - Woozy001 164. Society Of Silence – Réalisme Viscéral 165. HVL - Alignment 166. Alan Johnson - Material World 167. Matthew D Gantt - Diagnostics 168. DJ Detox - RM12009 169. Critical Amnesia - Critical Amnesia 170. Neinzer - Whities 025 EP 171. Despina & Ma Sha Ru - Polychronia 172. Divide - Computer Music 173. URA - Blue [NAFF008, 2020] 174. Forest Drive West - Terminus EP 175. Glacci - Alzarin _ Lavvender Rush 176. Fergus Sweetland - Fergus Sweetland 177. Various Artists - C12 - Social Distancing 1.1 178. A-Sim - The Puppet Master 179. Chlär - Power to the Soul 180. Will Hofbauer - Where Did All The Hay Go 181. Protect-U - In Harmony Of An Interior World 182. Instinct & 0113 - Instinct 11 183. Ribbon Stage - My Favorite Shrine 184. Zenker Brothers - Mad System 185. 2Lanes - Baby's Born To Fish... - Impish Desires 186. Nebulo - Parallaxes 187. Martyn Bootyspoon - Lickety Split 188. Erik Griswold - All's Grist That Comes To The Mill 189. Alex Falk - Movefast 190. DJ SWISHA & Kanyon - Club Simulator EP 191. Happa - Ls14 Battler _ 36Th Chamberlain (Remixes) 192. Svreca - FRUE 193. Anz - Loos In Twos (NRG) 194. James King - rinsed - installed 195. Catartsis & Ōtone - Mechanical Gesture 196. Daniel J. Gregory - Life Is A Bin 197. Desert Sound Colony - Pulled Through The Wormhole EP 198. Floral Resources - TS00000? 199. Alex R - Last Attempt 200. Notzing - The Abuse Of Hypnosis In Dance Environments 201. Brain Rays & Quiet - Butter [SR081A, 2020] 202. Benjamin Damage - Deep Space Transit 203. 
BROSHUDA - Contemplative Figuration 204. Various Artists - Radiant Love IWD Comp 205. Paradise 3001 - Low Sun Archives 206. 011668 & S280F - Os 207. Kubota, Kazuma - Mind 208. HATENA - HANDZ 209. Leonce - Seconds & Fifths EP 210. Furtive - Sympathies IV 211. French II - Time / Tracker 212. qwizzz - slag ep 213. Gag Reflex - The Fae 214. Luca Lozano & Mr. Ho - Homeboys 215. CONCEPTUAL - Introspective Research 216. Xyla - Ways 217. Minor Science - Second Language 218. Fana - Karantina 219. Current Obsession - XXX 220. K. Frimpong & Super Complex Sounds - Ahyewa 221. Ali Berger - The Stew 222. Sleep D - Smoke Haze 223. Nick León - MAZE 224. DJ Delish - Khadijah Vol. 6 225. Sputnik One - Kerosene 226. OOBE - SFTCR 2 227. Burrell Connection - Breaks That Strung the Camel Back 228. Wayne Phoenix - Soaring Wayne Phoenix Story The Earth 229. D.Dan - Mutant Future 230. Distance Dancer - Distance Dancer 231. Nikki Nair - Number One Slugger 232. Vinicius Honorio - Metamorphosis 233. Tracey - Microdancer EP 234. Ntu - Perfect Blue 235. Bliss Inc. - Hacking The Planet 236. JLTZ - Tools From Another Mother 237. Omnipony - GHOST1 238. WTCHCRFT - ACID EP vol. 2 239. Mike - Weight Of The World 240. Hypnaton - Hypnaton 241. Granary 12 - High 1987 242. Elisa Bee - Orbit EP 243. Stones Taro - Pump EP 244. Alexis - Refractions 245. Ntel - The Dilution Effect 246. X.WILSON - YUK
Text
NCERT Solutions Class 11 Computer Science Free PDF Download
To download free NCERT questions and answers for NCERT books of all classes and subjects (Physics, Chemistry, Biology, History, Political Science, Economics, Geography, Computer Science, Accountancy, Business Studies, Hindi, English, Mathematics, EVS, Social Science and Home Science), check the NCERTPREP website. The site provides sample papers with solutions, test papers for chapter-wise practice, NCERT book solutions, NCERT Exemplar solutions, quick revision notes for ready reference, CBSE guess papers and CBSE important question papers. All sample papers are made available through the best app for CBSE students and the NCERTPREP website.
NCERT Solutions for Class 11 Computer Science (Python) include all the questions given in the NCERT textbook. All questions are solved with detailed explanations and are free to check. The solutions are arranged chapter-wise: select the subject and choose a chapter to view its NCERT Solutions.
Computer Science is a practical subject, and deriving every answer on your own is a tedious task. Most students find the problems and practice exercises of the NCERT textbook difficult. So, what's the best way out? The best way is to have a solution book. The first unit comprises computer fundamentals, software concepts, data representation, the microprocessor, and memory. The basics of operating systems and some common algorithms are also dealt with here. Data representation is one of the most important chapters of computer science.
The main concept of binary numbers and how they are stored in computer memory is well explained. A student should learn to convert a decimal number into binary and vice versa. Practice the questions given at the end of your NCERT book and verify the answers against the solution book.
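To see what such an exercise looks like in practice, here is a minimal Python sketch (my own illustration, not taken from the NCERT book) that converts a decimal number to binary and back:

```python
def decimal_to_binary(n):
    """Convert a non-negative integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

def binary_to_decimal(b):
    """Convert a binary string back to an integer using positional weights."""
    value = 0
    for digit in b:
        value = value * 2 + int(digit)
    return value

print(decimal_to_binary(25))        # '11001'
print(binary_to_decimal("11001"))   # 25
```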
The solution book also explains step by step how each answer has been derived. All the concepts related to microprocessors, such as instruction sets and the 8085 and 8086 microprocessors, are explained in the fourth chapter.
Moving on to the next unit, programming methodology teaches how to write programs with correct syntax, how comments are used, and why writing comments is so important in coding. The next chapter, algorithms and flowcharts, throws light on modular and structured programming. The logical operators AND, OR, and NOT are discussed in detail here.
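For a feel of how comments and the logical operators look in Python, here is a small sketch of my own (the variable names are made up for illustration, not taken from the book):

```python
# A comment explains intent; Python ignores everything after the '#'.
marks = 72
attendance = 80

# Logical operators combine boolean conditions.
eligible = marks >= 40 and attendance >= 75    # True only if both hold
borderline = marks < 50 or attendance < 80     # True if either holds

print(eligible)        # True
print(not borderline)  # True, since borderline is False
```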
The third and fourth units deal with the language Python. Python is currently one of the most popular and widely accepted programming languages. You can build almost anything using Python's libraries and tools: it is used in web development, blockchain development, and in AI and ML as well. It is a very easy language to pick up, with many active communities worldwide, and it consistently scores highly in user-friendliness ratings.
The third unit teaches the basics of Python: operators, functions, and loops. The fifth unit calls for solid coding skills. A solution book helps a student build the right kind of logic, so that the code development process becomes clear.
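As a quick illustration of the operators, functions, and loops this unit covers, a short example of my own:

```python
def factorial(n):
    """Return n! by combining a loop with the multiplication operator."""
    result = 1
    for i in range(2, n + 1):   # loop from 2 up to n
        result *= i
    return result

for n in range(1, 6):
    print(n, "! =", factorial(n))   # 1..5 give 1, 2, 6, 24, 120
```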
NCERT Solutions for all classes (CBSE classes 3 to 12) are very helpful to students. Although NCERT solutions contain only the chapter-end questions and answers, these are considered key questions: most exam questions are either the same as or similar to them. Students are therefore advised to go through the NCERT textbooks and practice all the questions given at the end of each chapter; these questions will clear their basic doubts. We also recommend that students read the whole NCERT book line by line and prepare notes from it. It is always recommended to study NCERT books, as they cover the whole syllabus. These questions with detailed explanations are now available on NCERTPREP.com, free to view and download.
First of all, students must understand that NCERT textbook answers are not enough for exam preparation. They should treat the NCERT textbook questions and answers as basic learning tools, meant for understanding the concepts. NCERT textbooks are certainly a good source of quality content, so students should not settle for the chapter-end questions only; they should read the whole book thoroughly. NCERT Solutions for all classes are available in PDF format for free download. These chapter-wise questions and answers are very helpful for CBSE exams: CBSE recommends NCERT books, and most questions in CBSE exams are asked from the NCERT textbooks.
We hope that our NCERT Solutions Class 11 Computer Science Python helped with your studies! If you liked our NCERT Solutions for Class 11, please share this post.
Text
AY2019/2020 Y1S1 Module Review
AY2019/2020 year 1 semester 1 review
Started school around august after orientation camp in july, and had to study after doing nothing for months after a levels and finally had the taste of the rigour of this major.. semester 1 went by too quickly..
Modules taken this semester:
CS1010S
MA1101R
MA1521
BT1101
GER1000
CS1010S Programming Methodology (Python)
Prof: Ben Leong
Exam Dates: 2 Oct (Midterm) / 16 Nov (Practical Exam) / 27 Nov (Finals)
Weightage:
Coursemology – 25% (25%)
Participation – 5% (5%)
Midterm test – 15% (-)
Practical exam – 15% (20%)
Final assessment – 40% (50%)
(those in brackets are for those taking alternative final)
S in CS1010S is for science students, most students are either science students (DSA/ Life Science plenty) or BZA students.
Overall this module easily had the highest workload compared to other modules, having to rush missions every week, complete tutorials (this is pretty standard duh) and lecture trainings before deadlines for bonus points on a gamified platform. One could sit at their table wracking their brains for the whole day and still not be able to come up with feasible code, or have their code stuck with some bugs and not know how to continue. Really, without the help from fellow friends this module would be hard to get through. Luckily my TA was kind (and patient!) enough to explain such that my brain could get it. Ended up having to IP this module sadly… This module really requires your wholehearted devotion and really tests your patience i must say, especially for people who are not too intellectually inclined (aka me)..
They introduced a new scheme this semester aka Alternative Final, meaning you get to retake your midterm and finals by tabao-ing it into the next sem, except you sit the finals during the recess week instead of the usual finals period, kinda like half-retaking a module? Your grades for the finals are IP-ed (in progress) rather than letter grades, and the finals and midterms will be accounted for in the following half-semester albeit under different weightage components.
They said its an introductory module, but …………..
This year’s practical exam was particularly hard i think; i had friends (even the zai ones) getting single-digit marks… i banked fully on the Method of Life question (Q5) of the finals, which is a giveaway question asking how you can apply the concepts to other parts of your life and your main take-aways from taking this module (filled up the whole page and got full marks for it, 4m). without this question i would have failed the paper..
Now i have to work hard the next sem… its kinda sad for us BZA students because CS1010S is a prerequisite for those wanting to take BT2101 and CS2030/2040 modules in the following semester (y1s2). Future students (esp BZA) please take the advice to consider this when deciding whether to IP…. because guess who didnt and regretted not thinking deeper…..
Ah one more thing to take note is the weightage is quite different for those IP/ alternative final people, theres higher weightage for the papers :_D
Ben Leong is a pretty good lecturer, hes solid in delivering concepts except my brain may be a little too slow for him.. Theres also lecture videos online that you can refer to and thousands of papers (with solutions!!) waiting for you to do.. something uncommon for many modules i heard? also, you get to see your final (scanned) paper through a website, in ben leong’s words “how cool is that?” he also uploads the mark scheme for your reference which is pretty cool imo. He’s a very interesting lecturer.
MA1521 Calculus for Computing
Prof: Leung Pui Fai
Exams: No midterms, just an online quiz (4 questions, most get full marks for), and the finals
Weightage: cant really rmb the weightage but i think its 40-60? i think tutorial attendance isnt graded..
They said this was just a repeat of H2 maths with more stuff, well boy i must say this wasnt as easy as they said.. okay maybe just for me, ive always struggled with maths for a really long time. Surprisingly got a B for H2 Maths; i got a B3 for O levels, really the blemish in my results. Got a B- for this module. Many people will say this is an easy module, and you can trust them, it was just a little different in my shoes i guess. I didnt turn up for lectures for half of the semester since he talks a bit too slowly, so i just watched the webcast sped up. But being a procrastinator i was really behind on webcasts by the time the exams came.. i think i spent too much time on CS1010S and it still wasnt enough.. if you dont have the discipline to watch them religiously at home, i would suggest you go for the lecture; even though he may talk a bit slowly, it forces you to not miss out on them. I didnt really have the time (is it, i wonder?) to do the tutorials either so i was also behind on them.. most of the time i just sat for tutorials and took the answers down only to work on them many weeks later (much regrets), so i didnt really understand what was going on as the TA went through. please dont be like me… the recess week was for sure not enough to revise/learn all the content for all your mods for both midterms/finals so please dont be lazy like me…. this is the suffering i brought upon myself TT
Overall i think it is not that hard a mod if you do your work consistently.. things got a little confusing towards the end; i heard they dropped a whole chapter this semester, glad they did.
MA1101R Linear Algebra I
Prof: Wang Fei
Weightage:
Finals (28 Nov, 2h)— 60%.
Mid-term test (4 Oct, 2h)— 20%.
3 homework assignments (4% per assignment) — 12%.
An in-class Lab (MATLAB) quiz — 8%.
This was one of the hardest periods in my life and i say this on PERIODT. As if maths wasnt tough enough, this will really declare a survival of the fittest among your remaining brain cells. Friends told me maths came into their dreams… pls extinguish my soul. You must be thinking i am crazy for wanting to take 2 math mods in a sem, right?
Yeahh no one really does that but it was my idea because i didnt want to do maths together with all the core core mods (BT and CS) next sem so i decided ah i should just get maths over and done with ( hAH real joke bc i couldnt clear CS1010S and i cant take 2k level mods for BT and CS and unlocked clown outfit because theres one more ST2334 core mod that involves probability and stats so much for thinking i will be over and done with for dealing with maths– someone tell me why did i choose this major again?)
Somehow along the way i realised the bell curve for this was surprisingly high; i think those who chose this mod intend to delve even deeper into mathematics, mayhaps i joined the wrong major. The R in MA1101R actually stands for rigorous, which i didnt realise until my friend read the fine print in the SOC Course Curriculum for BZA or sumn. Pure hell. There are 3 homework assignments (graded, mind you) and most of the students get around 50++/60; i think i was one of the rare few who flunked quite badly and was always eyeballed by my TA (who is a prof for some 3k or 4k level maths, not for this mod though). I approached him for consults and for help and he was nice enough to sit me down and explain slowly. He’s pretty good at explaining slowly although he’s pretty fast in class (and for most of the semester i had close to ZERO idea what was going on in class for pretty much most of the mods). Shockingly managed to pull a C out of my butt. The intellect of the students is no joke.. Homework assignments are every 3 weeks starting week 6 i think (so weeks 6, 9, 12) and i think are there to make sure you catch up with the work.
Oh, lectures-wise, i sat for ½ of his classes and really absorbed almost nothing.. the rest of the lecture hall seemed to get it though, or so it seems. so i stopped attending my own lectures to watch the webcast for Prof Victor Tan instead. His webcasts/lectures are really popular and that really owes to his teaching; apparently he taught Wang Fei before and of course has over ten more years of experience. WF’s lecture turn-outs are comparatively lower than VT’s. And on panopto (webcast platform) i think it was almost always 360++ views for VT as compared to 80++/ was it 30++ for WF, if i recall correctly. VT’s slides are also more concise and simple to understand whereas WF’s ones are similar to the textbook. You are also required to purchase a textbook for this module, costs around $20 from the co-op store in science, and i urge you to purchase it asap when the profs announce they are made available bc they run oos quite fast.. the tutorial questions are from the textbook and the textbook is very simple and straightforward, put together by some of the lecturers/profs in school.
BT1101 Introduction to Business Analytics
Prof: Dr Sharon Tan, Desmond Ong
Weightage:
1. Online Quiz & Datacamp Assignments — 7%
Tutorial 1-4 — 8%
Tutorial 5 onwards — 15%
In-class Assessment (Written) — 10%
Practical Assessment — 20%
Final Assessment — 40%
In class assessment is held 2ish weeks after your midterms week so its kinda like your midterms?
Mm i would say this module is the most ?? its hard to put in words but if you read up the confessions page (NUSwhispers) regularly you would see many complaints that the mod is structured not as neatly as CS1010S its quite here and there everywhere and personal opinion, sometimes i dont know what i am supposed to learn but i guess its like that? The profs seem to value not wanting to spoonfeed and us learning on our own and stuff like that. I heard the mod was much harder in previous years and they simplified it a lot compared to in the past (which i really thank god) but its still a bit ?? They split it into two halves, first half of the sem is taught by Dr ST (Descriptive Analytics) and the next half by DO (Prescriptive and Predictive Analytics).
There are online videos to be watched every week even though you get lectures only once every 2 weeks when Dr ST teaches, and tutorials to be submitted to your TAs that are graded only after about 6/7 weeks. They leave comments (½ sentences, sometimes shorter) and your marks received and thats about all, so you dont really know where you went wrong since they are not marked the paper-and-pen way. The tutorials are coding exercises using the R language. They also used Datacamp to drill some of the basics of R for a headstart. Her workshop-style lectures are a lot of on-the-spot learning how to code and stuff, which i lag behind on a bit because she goes a bit fast in order to cover everything. We learn new content via the online videos that we have to watch every week and theres quizzes for them too, weekly iirc.
The next half by DO had no online videos (great!! and no quizzes!!) but weekly lectures, and graded tutorials are due every 2 weeks (!!). There are still weekly tutorials but only every 2nd one is graded, wow this saved me a lot of time phew. I didnt get to do the tutorials that are not graded but read through the questions so that i get a gist of whats going on, and somehow i really didnt have the time? CS1010S really absorbed a large chunk of my time, cries. Finals was oK, it was not that bad i think. There are 20 MCQs and then about 4 structured questions? Closed-book with 1 A4 cheatsheet.
Oh and the bad part about the tutorials is that the tutors wont provide you with the model answers/code, so you’re really just on your own. You either get it or nah. :_D
GER1000 Quantitative Reasoning
Weightage:
1. Tutorial — 10%
2. 10 Weekly Quizzes — 20%
3. Project —35% (Presentation 10%, Final Report 25%)
4. Finals (28 MCQs, 2h) — 35%
No lectures so no profs, just weekly online videos and quizzes.
Tutorials are every odd/even weeks depending on the slot you chose.
Groups are arranged by the TAs beforehand.
This was pre-allocated for us so (grits teeth). Honestly a waste of time. One of the mods i neglected till the end to focus on other mods (which was worth it). The workload was manageable, of course (if not, how to neglect). Every 2 weeks you meet with your groupmates to discuss tutorial questions (each group will discuss 1 qn) and every tutorial class ended about 30min early. Nearing the end theres a group project report and slides to be done. The report is in the form of QnA so you just answer the questions, and the slides/presentation is going through an article on a topic you chose (theres about 10) and analysing the QR part of it: what is good, what can be better, etc. Theres also a bit of a stats part with probability and stuff but its OK. Bell curve is steep for finals (40 MCQs, 2h) but most finished in 1h and left the hall; i was one of the few who stayed till the end even though i was just staring at the paper into the depths of my soul for reasons unknown. It’s a lot about experiments, not really the scientific/calculations part of it but understanding how to come up with experiments, the pros and cons of carrying things out a certain way, in loose terms something like the art of crafting experiments? makes you think a bit deeper about how and what people think and not so dry i guess.
Epilogue
i guess thats a wrap–new semester starts soon :( i think this might be the first module/semester review tumblr blog but i hope this can be of help to anyone, to anyone at all. the owners of many similar review blogs get really stellar results which may be out of my league, so i hope this brings comfort to those who are doing not so well and encourages them, because im not any different. we exist, and we’ll survive.
CARPE DIEM 2020 LETS GEDDIT
Photo
Python Programming is designed as a textbook to fulfil the requirements of the first-level course in Python programming. It is suited for undergraduate degree students of computer science engineering, information technology as well as computer applications. This book will enable students to apply the Python programming concepts in solving real-world problems. The book begins with an introduction to computers, problem solving approaches, programming languages, object oriented programming and Python programming. Separate chapters dealing with the important constructs of the Python language, such as control statements, functions, strings, files, data structures, classes and objects, inheritance, operator overloading and exceptions, are provided in the book.
• Case studies on creating a calculator, a calendar, hash files, compressing strings and files, tower of Hanoi, image processing, shuffling a deck of cards and mail merge demonstrate the application of various concepts.
• Point-wise summary and glossary of key terms to aid quick recapitulation of concepts.
Table of contents:
1. Introduction to Computers and Problem Solving Strategies
2. Introduction to Object Oriented Programming
3. Basics of Python Programming
4. Decision Control Statements
5. Functions
6. Python Strings Revisited
7. File Handling
8. Data Structures
9. Classes and Objects
10. Inheritance and Polymorphism
11. Operator Overloading
12. Error and Exception Handling
• About the Author: Reema Thareja is presently Assistant Professor, Department of Computer Science, Shyama Prasad Mukherji College for Women, University of Delhi.
#unifeed #onlinebookstore #pune #reemathareja #python #pythonprogramming #oop #programming #code #pythonprogramming (at Python) https://www.instagram.com/p/B2zK3Cxh60l/?igshid=tuzb744iutd4
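To give a flavour of the operator overloading and exception handling chapters listed above, here is a minimal sketch of my own (not code taken from the book):

```python
class Fraction:
    """A tiny fraction class illustrating operator overloading and exceptions."""

    def __init__(self, num, den):
        if den == 0:
            raise ZeroDivisionError("denominator cannot be zero")
        self.num, self.den = num, den

    def __add__(self, other):
        # Overloading '+' so two Fraction objects can be added naturally.
        return Fraction(self.num * other.den + other.num * self.den,
                        self.den * other.den)

    def __str__(self):
        return f"{self.num}/{self.den}"

try:
    print(Fraction(1, 2) + Fraction(1, 3))   # 5/6
    Fraction(1, 0)                           # raises ZeroDivisionError
except ZeroDivisionError as err:
    print("Error:", err)
```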
Link
GET THIS BOOK
Author: Jimmy Song
Published in: O’Reilly Media
ISBN: 978-1492-0-3149-9
File Type: pdf
File Size: 13 MB
Language: English

Description
Programming Bitcoin will teach you the technology of Bitcoin at a fundamental level. It doesn’t cover the monetary, economic, or social dynamics of Bitcoin, but knowing how Bitcoin works under the hood should give you greater insight into what’s possible. There’s a tendency to hype Bitcoin and blockchain without really understanding what’s going on; this book is meant to be an antidote to that tendency. After all, there are lots of books about Bitcoin, covering the history and the economic aspects and giving technical descriptions. The aim of this book is to get you to understand Bitcoin by coding all of the components necessary for a Bitcoin library. The library is not meant to be exhaustive or efficient. The aim of the library is to help you learn.

Who Is Programming Bitcoin For?
Programming Bitcoin is for programmers who want to learn how Bitcoin works by coding it themselves. You will learn Bitcoin by coding the “bare metal” stuff in a Bitcoin library you will create from scratch. This is not a reference book where you can look up the specification for a particular feature. The material has been largely taken from my two-day seminar where I teach developers all about Bitcoin. It has been refined extensively, as I’ve taught this course more than 20 times, to over 400 people as of this writing. By the time you’re done with the book, you’ll not only be able to create transactions, but also get all the data you need from peers and send the transactions over the network. It covers everything needed to accomplish this, including the math, parsing, network connectivity, and block validation.

What Do I Need to Know?
A prerequisite for this book is that you know programming, Python in particular. The library itself is written in Python 3, and a lot of the exercises can be done in a controlled environment like a Jupyter notebook. An intermediate knowledge of Python is preferable, but even a beginning knowledge is probably sufficient to get a lot of the concepts. Some knowledge of math is required, especially for Chapters 1 and 2. These chapters introduce mathematical concepts probably not familiar to those who didn’t major in mathematics. Math knowledge around algebra level should suffice to understand the new concepts and to code the exercises covered in those chapters. General computer science knowledge, for example of hash functions, will come in handy but is not strictly necessary to complete the exercises.

How Is the Book Arranged?
Programming Bitcoin is split into 14 chapters. Each is meant to build on the previous one as the Bitcoin library gets built from scratch all the way to the end. Roughly speaking, Chapters 1–4 establish the mathematical tools that we need; Chapters 5–8 cover transactions, which are the fundamental unit of Bitcoin; and Chapters 9–12 cover blocks and networking. The last two chapters cover some advanced topics but don’t actually require you to write code.

Chapters 1 and 2 cover some math that we need. Finite fields and elliptic curves are needed to understand elliptic curve cryptography in Chapter 3. After we’ve established the public key cryptography at the end of Chapter 3, Chapter 4 adds parsing and serialization, which are how cryptographic primitives are stored and transmitted. Chapter 5 covers the transaction structure. Chapter 6 goes into the smart contract language behind Bitcoin, called Script. Chapter 7 builds on all the previous chapters, showing how to validate and create transactions based on the elliptic curve cryptography from the first four chapters. Chapter 8 establishes how pay-to-script-hash (p2sh) works, which is a way to make more powerful smart contracts. Chapter 9 covers blocks, which are groups of ordered transactions. Chapter 10 covers network communication in Bitcoin. Chapters 11 and 12 go into how a light client, or software without access to the entire blockchain, might request and broadcast data to and from nodes that store the entire blockchain. Chapter 13 covers Segwit, a backward-compatible upgrade introduced in 2017, and Chapter 14 provides suggestions for further study. These chapters are not strictly necessary, but are included as a way to give you a taste of what more there is to learn.

Chapters 1–12 have exercises that require you to build up the library from scratch. The answers are in Appendix A and in the corresponding chapter directory in the GitHub repo. You will be writing many Python classes and building toward not just validating transactions/blocks, but also creating your own transactions and broadcasting them on the network. The last exercise in Chapter 12 specifically asks you to connect to another node on the testnet network, calculate what you can spend, construct and sign a transaction of your devising, and broadcast that on the network. The first 11 chapters essentially prepare you for this exercise. There will be a lot of unit tests that your code will need to pass. The book has been designed this way so you can do the “fun” part of coding. To aid your progress, we will be looking at a lot of code and diagrams throughout.
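To give a taste of the finite field arithmetic the early chapters build toward, here is a minimal Python sketch (an illustration under my own naming, not the book's actual code):

```python
class FieldElement:
    """An element of a prime finite field, with modular add and multiply."""

    def __init__(self, num, prime):
        if not 0 <= num < prime:
            raise ValueError(f"{num} not in field range 0 to {prime - 1}")
        self.num, self.prime = num, prime

    def __add__(self, other):
        # Addition wraps around modulo the prime.
        return FieldElement((self.num + other.num) % self.prime, self.prime)

    def __mul__(self, other):
        return FieldElement((self.num * other.num) % self.prime, self.prime)

    def __repr__(self):
        return f"FieldElement_{self.prime}({self.num})"

a = FieldElement(7, 13)
b = FieldElement(12, 13)
print(a + b)   # FieldElement_13(6), since (7 + 12) % 13 == 6
print(a * b)   # FieldElement_13(6), since (7 * 12) % 13 == 6
```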
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
| Machine Learning Resource | Time (hours) | Cost ($) | Year |
| --- | --- | --- | --- |
| Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17 |
| {ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16 |
| Machine Learning Crash Course | 15 | $0 | '18 |
| OCDevel Machine Learning Guide Podcast | 30 | $0 | '17- |
| Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17 |
| Fast.ai (part 1) | 70 | $70* | '16 |
| Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17 |
| Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15 |
| Andrew Ng's Coursera Machine Learning | 55 | $0 | '11 |
| iPullRank Machine Learning Guide | 3 | $0 | '17 |
| Review Google PhD | 2 | $0 | '17 |
| Caltech Machine Learning on iTunes | 27 | $0 | '12 |
| Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 |
| Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15 |
| Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16 |
| Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15 |
| Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 |

(The original chart also rates each resource on Credibility, Code, Math, and Enjoyability; those ratings appear as graphics and are not reproduced here.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below is a high-level summary of my reviews of all the classes I took, along with a plan for how I would approach learning machine learning if I could start over. The full write-up for each course, with notes, follows.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
Gloss over potentially complex topics that may serve as noise.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker on Windows (the suggested package manager). This wasn't a huge deal, since I already had my AWS setup by this point; however, a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with ability to adjust speed, closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/ playgrounds, code lab exercises (run directly in your browser (no setup required!))
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews concepts in Coursera course very well, such that both pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain courses were more theory-based; all are interesting, yet impractical.
Due to limited funding the project is a bit slow to update and has less than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas through the application of the examples onto another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge in the lesson to their workbook.
It wasn't a big deal, but when I started referencing files in the lesson, I had to dive into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied and not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second took a bit longer to work through).
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. This is the only course that I started to truly see the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glided over this. I felt the frustration that most people experience from installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine Learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focuses on developing a visual intuition for what each model is trying to accomplish.
This visual approach to the underlying mathematics is very useful (see the small decision-boundary sketch at the end of this review).
Covers a wide variety of models and machine learning basics.
In terms of presenting the concepts, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started; the limited instructions on setting up the environment and my many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's provided with little acknowledgement of what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently a Facebook UI Engineer and a music aficionado) on the machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
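The visual-intuition exercises mentioned above typically amount to plotting a classifier's decision boundary over toy 2-D data. This is not the course's code (it builds its own toy datasets); it's just a minimal sketch of that kind of plot with scikit-learn and matplotlib.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_moons
    from sklearn.naive_bayes import GaussianNB

    # Toy 2-D dataset standing in for the course's toy examples.
    X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
    clf = GaussianNB().fit(X, y)

    # Evaluate the classifier over a grid so its decision regions can be shaded.
    xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                         np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

    plt.contourf(xx, yy, Z, alpha=0.3)                  # shaded decision regions
    plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")   # training points
    plt.title("Gaussian Naive Bayes decision boundary")
    plt.show()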
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. One can submit assignments and earn a grade for free; to earn a certificate, one can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Takes a classic machine learning teaching approach with a strong focus on the underlying math -- for example, deriving linear regression trained via gradient descent (a small numpy sketch of that idea follows this review).
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined about setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
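For reference, the course's opening weeks build up linear regression trained with batch gradient descent (in Octave/MATLAB). Purely as a point of comparison with more modern tooling, here is the same idea as a minimal numpy sketch; the variable names and constants are mine, not the course's.

    import numpy as np

    # Toy data: y is roughly 4 + 3x plus noise.
    rng = np.random.default_rng(42)
    X = 2 * rng.random((100, 1))
    y = 4 + 3 * X[:, 0] + rng.normal(scale=0.5, size=100)

    # Add a bias column and initialize the parameters (theta in the course's notation).
    Xb = np.c_[np.ones(len(X)), X]
    theta = np.zeros(2)
    lr = 0.1  # learning rate (alpha in the course)

    # Batch gradient descent on the mean squared error cost.
    for _ in range(1000):
        gradients = (2 / len(Xb)) * Xb.T @ (Xb @ theta - y)
        theta -= lr * gradients

    print("Learned parameters:", theta)  # should land close to [4, 3]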
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including MonkeyLearn and Orange).
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual, user-friendly interface and offers the potential to tell some pretty compelling stories.
Example: the World Happiness Dataset, plotted with:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
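Orange builds that kind of chart entirely through its drag-and-drop interface; if you'd rather script the same view, a rough pandas/matplotlib equivalent might look like the sketch below (the file name and column names are placeholders -- adjust them to whatever copy of the World Happiness data you download).

    import pandas as pd
    import matplotlib.pyplot as plt

    # Placeholder file and column names -- rename to match your copy of the dataset.
    df = pd.read_csv("world_happiness.csv")

    plt.scatter(df["Happiness Score"], df["Economy"],
                c=df["Health"], cmap="viridis")   # color encodes the Health column
    plt.colorbar(label="Health")
    plt.xlabel("Happiness Score")
    plt.ylabel("Economy")
    plt.title("Economy vs. Happiness Score, colored by Health")
    plt.show()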
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous collection of handwritten digits that the model must learn to identify). A minimal code sketch of that task follows this review.
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g., ReLU, CNNs, RNNs).
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; the two hours of screen time don't account for all of the pausing, Googling, and processing you'll do alongside it.
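The talk itself builds its networks in lower-level TensorFlow; purely for orientation, here is a minimal tf.keras sketch of the same MNIST task using a small convolutional network (the kind of model the talk works up to). The layer sizes here are my own choices, not the talk's.

    import tensorflow as tf

    # MNIST: 28x28 grayscale images of handwritten digits, labels 0-9.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

    # A small convolutional network.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))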
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who weaves in useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students, "Why do I give you practice exams and not just the final exam?" to illustrate why a held-out testing set is useful. If he gave students the final ahead of time, they would simply memorize the answers (i.e., overfit to the data) rather than genuinely learn the material; the final exists to measure how much they actually learned. (The short train/test sketch at the end of this review shows the same idea in code.)
The last half hour of each class is a Q&A; the students' questions were useful for understanding the topics in more depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, it was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it conceptually pulls the whole course together. The map of the course, below, was particularly useful for organizing the information taught throughout.
Image source: http://work.caltech.edu/slides/slides18.pdf
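The practice-exam analogy above maps directly onto holding out a test set. As a toy illustration (my own example, not from the course), an unconstrained model can ace the "practice exams" while flunking the "final":

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Toy classification data, with a held-out "final exam" (the test set).
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # An unconstrained decision tree can memorize the "practice exams" (training data)...
    tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print("Train accuracy:", tree.score(X_train, y_train))  # typically 1.0
    print("Test accuracy: ", tree.score(X_test, y_test))    # noticeably lower: overfitting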
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and too many prerequisites for me to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the historical asides, where Bishop talks about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Adds fun anecdotes that make it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's roughly 300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples (the small k-NN sketch at the end of this review gives a flavor of the book's early examples).
The first few chapters served as a stellar introduction to the basics of machine learning.
Contains subtle jokes that add a bit of fun.
The tip to use the Anaconda Python distribution with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was a helper library written to accompany the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
He also makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
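For a flavor of the book's early examples (paraphrased from memory rather than copied, and leaving out the mglearn plotting helpers), a first classifier in the book's style looks roughly like this k-nearest-neighbors model on the iris dataset:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Load the classic iris dataset and hold out a test set.
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, random_state=0)

    # Fit a k-nearest-neighbors classifier and check its accuracy on unseen flowers.
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    print("Test set accuracy:", knn.score(X_test, y_test))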
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking it here will not earn credit towards the OMS degree, it still follows a non-watered-down college teaching philosophy.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including VC dimension, Bayesian learning, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses, but which are referenced within Google patents (see the small value-iteration sketch at the end of this review).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some content was missing (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
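Since Markov decision processes were the least familiar topic for me, here is a tiny value-iteration sketch on a made-up two-state MDP to anchor the idea; the states, rewards, and transition probabilities are invented purely for illustration and have nothing to do with the course's examples.

    import numpy as np

    # A made-up MDP with 2 states and 2 actions.
    # P[a][s][s2] = probability of landing in state s2 after taking action a in state s.
    P = np.array([
        [[0.9, 0.1],    # action 0
         [0.2, 0.8]],
        [[0.1, 0.9],    # action 1
         [0.6, 0.4]],
    ])
    # R[s][a] = immediate reward for taking action a in state s.
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    gamma = 0.9  # discount factor

    # Value iteration: repeatedly back up the value of each state.
    V = np.zeros(2)
    for _ in range(200):
        # Q[s][a] = expected return for taking a in s, then acting optimally afterwards.
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)

    print("State values:", V)
    print("Greedy policy:", Q.argmax(axis=1))  # best action in each state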
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few courses, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: World Happiness Dataset by:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learnings and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 IO conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset containing a bunch of handwritten numbers, which the machine must learn to identify the numbers).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference solution and not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematic intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur, includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, being released in 2012, it could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls together the course overall conceptually. The map of the course, below, was particularly useful to organizing the information taught in the courses.
Image source: http://work.caltech.edu/slides/slides18.pdf
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. It was too much math and pre-requisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematically text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some pretty incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov Decision Chains, which is something that didn't really come up in many other introductory machine learning course, but they are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
Tablet version didn't function flawlessly: some was missing content (which I had to mark down and review on a desktop), the app would crash randomly on the tablet, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focusing on those lessons.
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few courses, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
Video and audio quality were pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA classes, supplementary learning, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
Text
CBSE Class 12 Computer Science and Informatics Practice
CBSE Class 12 Computer Science and Informatics Practice
numpy-class-12-computer-scienceDownload pandas-class-12-computer-scienceDownload
youtube
View On WordPress
#class 12 computer science all chapter#class 12 computer science all in one#class 12 computer science amplify learning#class 12 computer science answers#class 12 computer science apni kaksha#class 12 computer science array#class 12 computer science assignment#class 12 computer science board paper 2019#class 12 computer science book#class 12 computer science book back answers#class 12 computer science book ncert#class 12 computer science book python#class 12 computer science book sumita arora#class 12 computer science by apni kaksha#class 12 computer science by unacademy#class 12 computer science c++#class 12 computer science chapter 1#class 12 computer science chapter 1 pseb#class 12 computer science chapter 2#class 12 computer science chapter 3#class 12 computer science chapter 4#class 12 computer science data file handling#class 12 computer science data structures#class 12 computer science full form#class 12 computer science functions notes#class 12 computer science guide#class 12 computer science python#class 12 computer science python chapter 1#class 12 computer science python chapter 3#class 12 computer science python chapter functions
0 notes
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
//<![CDATA[ (function($) { // code using $ as alias to jQuery $(function() { // Hide the hypotext content. $('.hypotext-content').hide(); // When a hypotext link is clicked. $('a.hypotext.closed').click(function (e) { // custom handling here e.preventDefault(); // Create the class reference from the rel value. var id = '.' + $(this).attr('rel'); // If the content is hidden, show it now. if ( $(id).css('display') == 'none' ) { $(id).show('slow'); if (jQuery.ui) { // UI loaded $(id).effect("highlight", {}, 1000); } } // If the content is shown, hide it now. else { $(id).hide('slow'); } }); // If we have a hash value in the url. if (window.location.hash) { // If the anchor is within a hypotext block, expand it, by clicking the // relevant link. console.log(window.location.hash); var anchor = $(window.location.hash); var hypotextLink = $('#' + anchor.parents('.hypotext-content').attr('rel')); console.log(hypotextLink); hypotextLink.click(); // Wait until the content has expanded before jumping to anchor. //$.delay(1000); setTimeout(function(){ scrollToAnchor(window.location.hash); }, 1000); } }); function scrollToAnchor(id) { var anchor = $(id); $('html,body').animate({scrollTop: anchor.offset().top},'slow'); } })(jQuery); //]]>
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining an general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
Machine Learning Resource
Time (hours)
Cost ($)
Year
Credibility
Code
Math
Enjoyability
Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to
2
$0
'17
{ML} Recipes with Josh Gordon Playlist
2
$0
'16
Machine Learning Crash Course
15
$0
'18
OCDevel Machine Learning Guide Podcast
30
$0
'17-
Kaggle's Machine Learning Track (part 1)
6
$0
'17
Fast.ai (part 1)
70
$70*
'16
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
20
$25
'17
Udacity's Intro to Machine Learning (Kate/Sebastian)
60
$0
'15
Andrew Ng's Coursera Machine Learning
55
$0
'11
iPullRank Machine Learning Guide
3
$0
'17
Review Google PhD
2
$0
'17
Caltech Machine Learning on iTunes
27
$0
'12
Pattern Recognition & Machine Learning by Christopher Bishop
150
$75
'06
N/A
Machine Learning: Hands-on for Developers and Technical Professionals
15
$50
'15
Introduction to Machine Learning with Python: A Guide for Data Scientists
15
$25
'16
Udacity's Machine Learning by Georgia Tech
96
$0
'15
Machine Learning Stanford iTunes by Andrew Ng
25
$0
'08
N/A
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from University of Bristols
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
Gloss over potentially complex topics that may serve as noise.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Dockers on Windows (suggested package manager). This wasn't a huge deal, since I already had my AWS setup by this point; however, a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for at TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with ability to adjust speed, closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/ playgrounds, code lab exercises (run directly in your browser (no setup required!))
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews concepts in Coursera course very well, such that both pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain courses were more theory-based; all are interesting, yet impractical.
Due to limited funding the project is a bit slow to update and has less than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Panda through the application of the examples onto another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track offers has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge in the lesson to their workbook.
It wasn't a big deal, but when I started referencing files in the lesson, I had to dive into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied and not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second took a bit longer to work through).
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. This is the only course that I started to truly see the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure on has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glided over this. I felt the frustration that most people experience from installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine Learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Cover a vast variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This is the first course started and the limited instructions on setting up the environment and many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus in building mathematical intuition behind machine learning models. Also, one can submit assignments and earn a grade for free. If you want to earn a certificate, one can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible, the option to switch into a different section is always available.
Disliked:
The mathematic notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: World Happiness Dataset by:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learnings and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 IO conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset containing a bunch of handwritten numbers, which the machine must learn to identify the numbers).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference solution and not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematic intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur, includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, being released in 2012, it could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls together the course overall conceptually. The map of the course, below, was particularly useful to organizing the information taught in the courses.
Image source: http://work.caltech.edu/slides/slides18.pdf
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. It was too much math and pre-requisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematically text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some pretty incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out the code examples (a minimal scikit-learn sketch of the kind of workflow you'll be retyping follows these tips).
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it still follows a non-watered-down college teaching philosophy.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses, but which are referenced within Google patents (a toy value-iteration sketch follows this list).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
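To make the Markov decision process material a bit more concrete, here is a toy value-iteration sketch in plain Python; the two-state MDP, its rewards, and the discount factor are invented purely for illustration and are not taken from the course:

# Value iteration on a made-up 2-state MDP.
# transitions[s][a] is a list of (probability, next_state, reward) tuples.
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor (assumed for illustration)
V = {s: 0.0 for s in transitions}  # initial state-value estimates

for _ in range(100):  # repeat the update until values settle
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

print(V)  # approximate optimal value of each state

Each pass replaces every state's value with the best expected one-step return, which is the core of the Bellman optimality update.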
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
Tablet version didn't function flawlessly: some content was missing (which I had to mark down and review on a desktop), the app would crash randomly on the tablet, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning, to the point that students could create their own machine learning algorithms (a tiny gradient-descent sketch of this style of derivation follows this list). This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
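As a taste of the style of derivation worked through on the blackboard, here is a tiny NumPy sketch of linear regression trained with batch gradient descent; the synthetic data, learning rate, and iteration count are my own assumptions, not the lecture's example:

import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0  # parameters to learn
lr = 0.1  # learning rate (assumed)
for _ in range(2000):  # batch gradient descent on mean squared error
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    b -= lr * 2 * np.mean(pred - y)        # d(MSE)/db

print(w, b)  # should land close to 2 and 1

The lectures derive these gradient formulas by hand, which is exactly the kind of math the Coursera version glosses over.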
Disliked:
Video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA classes and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
(function($) { // code using $ as alias to jQuery $(function() { // Hide the hypotext content. $('.hypotext-content').hide(); // When a hypotext link is clicked. $('a.hypotext.closed').click(function (e) { // custom handling here e.preventDefault(); // Create the class reference from the rel value. var id = '.' + $(this).attr('rel'); // If the content is hidden, show it now. if ( $(id).css('display') == 'none' ) { $(id).show('slow'); if (jQuery.ui) { // UI loaded $(id).effect("highlight", {}, 1000); } } // If the content is shown, hide it now. else { $(id).hide('slow'); } }); // If we have a hash value in the url. if (window.location.hash) { // If the anchor is within a hypotext block, expand it, by clicking the // relevant link. console.log(window.location.hash); var anchor = $(window.location.hash); var hypotextLink = $('#' + anchor.parents('.hypotext-content').attr('rel')); console.log(hypotextLink); hypotextLink.click(); // Wait until the content has expanded before jumping to anchor. //$.delay(1000); setTimeout(function(){ scrollToAnchor(window.location.hash); }, 1000); } }); function scrollToAnchor(id) { var anchor = $(id); $('html,body').animate({scrollTop: anchor.offset().top},'slow'); } })(jQuery); .hypotext-content { position: relative; padding: 10px; margin: 10px 0; border-right: 5px solid; } a.hypotext { border-bottom: 1px solid; } .hypotext-content .close:before { content: "close"; font-size: 0.7em; margin-right: 5px; border-bottom: 1px solid; } a.hypotext.close { display: block; position: absolute; right: 0; top: 0; line-height: 1em; border: none; }
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining an general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
Machine Learning Resource
Time (hours)
Cost ($)
Year
Credibility
Code
Math
Enjoyability
Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to
2
$0
'17
{ML} Recipes with Josh Gordon Playlist
2
$0
'16
Machine Learning Crash Course
15
$0
'18
OCDevel Machine Learning Guide Podcast
30
$0
'17-
Kaggle's Machine Learning Track (part 1)
6
$0
'17
Fast.ai (part 1)
70
$70*
'16
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
20
$25
'17
Udacity's Intro to Machine Learning (Kate/Sebastian)
60
$0
'15
Andrew Ng's Coursera Machine Learning
55
$0
'11
iPullRank Machine Learning Guide
3
$0
'17
Review Google PhD
2
$0
'17
Caltech Machine Learning on iTunes
27
$0
'12
Pattern Recognition & Machine Learning by Christopher Bishop
150
$75
'06
N/A
Machine Learning: Hands-on for Developers and Technical Professionals
15
$50
'15
Introduction to Machine Learning with Python: A Guide for Data Scientists
15
$25
'16
Udacity's Machine Learning by Georgia Tech
96
$0
'15
Machine Learning Stanford iTunes by Andrew Ng
25
$0
'08
N/A
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Mayes' Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks content into beginner/need-to-know material (green) and intermediate material that is mostly noise for those just starting out (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This YouTube-hosted mini-series covers the fundamentals of machine learning, with opportunities to complete exercises along the way.
Loved:
It is genuinely beginner-focused.
It assumes no prior knowledge.
It glosses over potentially complex topics that might otherwise serve as noise.
The playlist runs only ~2 hours.
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker running on Windows (the suggested setup). This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download it (over the course of two weeks), the .exe file would repeatedly restart and keep spinning until either my memory ran out, my computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with adjustable speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises (run directly in your browser; no setup required!)
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journal publications, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, such that the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes were more theory-based; all are interesting, yet some are impractical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript; 10 years in web and mobile development
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and pandas, and has you apply the examples to another dataset.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included, which removed any and all setup/installation issues.
Side note: It's a bit different from a Jupyter notebook (e.g., you have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that you need to apply the lesson's knowledge to your own workbook.
It wasn't a big deal, but when I started referencing files in the lesson, I had to dive into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied and not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and TensorFlow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
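For a flavor of what lesson 1 walks you through, here is a minimal sketch of that flow -- load tabular data into a pandas DataFrame, pick a target and a few feature columns, and fit a decision tree. The lesson itself works on a housing CSV hosted on Kaggle; a scikit-learn built-in dataset stands in here so the snippet runs anywhere, and this is not the lesson's actual code.
from sklearn.datasets import fetch_california_housing
from sklearn.tree import DecisionTreeRegressor

# Load data as a pandas DataFrame (the lesson does this with read_csv()).
housing = fetch_california_housing(as_frame=True)
df = housing.frame

# Choose a prediction target and a handful of feature columns.
y = df["MedHouseVal"]
X = df[["MedInc", "HouseAge", "AveRooms"]]

# Fit a decision tree and sanity-check its predictions on the first few rows.
model = DecisionTreeRegressor(random_state=0)
model.fit(X, y)
print(model.predict(X.head()))
print(y.head().tolist())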
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active way to learn ML, and the source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. It is the only course where I truly started to see the practical mechanics come together, and it involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Shows an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
The overview lesson covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-Aid, skip past it.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend roughly 70 hours on this course (it's worth it).
Don't forget to shut off your AWS instance when you're done (a short sketch of doing this from Python follows this list).
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
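On the tip about shutting off your AWS instance: one way to make sure a forgotten instance isn't quietly billing you is to stop it from Python with boto3 (you can do the same from the AWS console or CLI). This is a generic sketch, not part of the course; the region and instance ID below are placeholders.
import boto3

# Stop a specific EC2 instance so it stops accruing compute charges.
# Both values below are placeholders -- substitute your own region and instance ID.
ec2 = boto3.client("ec2", region_name="us-west-2")
response = ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
print(response["StoppingInstances"][0]["CurrentState"]["Name"])  # e.g. "stopping"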
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes the reader to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introduction was very useful and put everything into context; this general-to-specific approach worked well.
Disliked:
Installation was a common source of issues at the beginning of my journey, and the text glossed over it. I felt the frustration that most people experience with installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introduction to each chapter thoroughly; read the chapter (paying careful attention to the code); review the questions at the end (highlighting any in-text answers); make a copy of Aurélien's GitHub repo and make sure everything works on your setup; re-type the notebooks; then go to Kaggle and try the techniques on other datasets.
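For a sense of scale, the end-to-end flow chapter 2 walks through compresses to something like the sketch below -- load data, hold out a test set, fit a pipeline, and evaluate with RMSE. This is only the shape of it, not the book's code (that lives in Aurélien's GitHub repo).
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the data and set aside a test set before doing anything else.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Preprocess and train in one pipeline, using the training set only.
model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X_train, y_train)

# Evaluate on the held-out test set with RMSE.
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"Test RMSE: {rmse:.3f}")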
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Covers a vast variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started, and the limited instructions on setting up the environment, combined with many failed attempts, caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. Also, you can submit assignments and earn a grade for free. If you want to earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different session is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called Deeplearning.ai, a ~15-week course containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined about setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
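The course's early programming assignments have you implement batch gradient descent for linear regression in Octave/MATLAB. Purely as an illustration of that kind of exercise (my own numpy translation, not the course's code), it boils down to something like this:
import numpy as np

# Synthetic data: y is roughly 3x + 4 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 4.0 + rng.normal(0, 1, size=100)

w, b = 0.0, 0.0   # parameters (theta_1 and theta_0 in the course's notation)
alpha = 0.01      # learning rate
m = len(x)

for _ in range(2000):
    y_hat = w * x + b
    # Gradients of the mean squared error cost J = (1 / (2m)) * sum((y_hat - y)^2)
    dw = (1 / m) * np.sum((y_hat - y) * x)
    db = (1 / m) * np.sum(y_hat - y)
    w -= alpha * dw
    b -= alpha * db

print(f"learned w = {w:.2f}, b = {b:.2f} (the data was generated with 3 and 4)")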
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: the World Happiness dataset, plotted with:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
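Orange builds that kind of chart point-and-click; the same plot in pandas/matplotlib looks roughly like the sketch below. The file name and column names are hypothetical -- adjust them to however your copy of the World Happiness data is labeled.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names -- rename to match your dataset.
df = pd.read_csv("world_happiness.csv")
points = plt.scatter(df["Happiness Score"], df["Economy"], c=df["Health"], cmap="viridis")
plt.xlabel("Happiness Score")
plt.ylabel("Economy")
plt.colorbar(points, label="Health")
plt.show()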
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on MNIST (a famous dataset of handwritten digits that the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; two hours of screen time doesn't account for all of the Googling and processing time.
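If you want to poke at the talk's running example yourself, a minimal convolutional network on MNIST looks something like this in Keras. This is only an illustration of the task -- it is not the talk's code -- and I'm not promising 99% out of the box.
from tensorflow import keras
from tensorflow.keras import layers

# Load the 28x28 grayscale digit images and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional network: two conv/pool blocks, then a softmax over the 10 digits.
model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_data=(x_test, y_test))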
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just hand students the final, they would simply memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test of how much students actually learned (a small code illustration of this idea follows this list).
The last half hour of each class is always a Q&A, where students can ask questions. Their questions were useful for understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The lecture on neural networks was a close second!
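Here is the small code illustration promised above -- my example, not the course's. Score a model on the data it trained on (the memorized practice exam) and on data it has never seen (the final), and the gap between the two is the overfitting Dr. Abu-Mostafa is warning about.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hold out 30% of the data as the "final exam" the model never studies.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # the practice exams
print("held-out accuracy:", model.score(X_test, y_test))    # the real final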
Disliked:
Although it contains mostly evergreen content, the course was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls the course together conceptually. The map of the course, below, was particularly useful for organizing the information taught across the lectures.
Image source: http://work.caltech.edu/slides/slides18.pdf
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook; I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and too many prerequisites for me to tackle (even with a multitude of Google sessions).
Loved:
It's the text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the historical asides, where Bishop talks about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Adds fun anecdotes that make it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's roughly 300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contains subtle jokes that add a bit of fun.
The tip to use the Anaconda Python distribution with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
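For reference, mglearn is just the authors' support package for the book (installable with pip), and the book's opening worked example is k-nearest neighbors on the iris dataset -- roughly the sketch below, which is my reconstruction rather than the book's exact code.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Split the iris data, fit a 1-nearest-neighbor classifier, and score it on the held-out set.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(knn.score(X_test, y_test)))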
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Master's Degree. Although taking the course here will not earn credit towards the OMS degree, it still follows a non-watered-down college teaching approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in other introductory machine learning courses but are referenced within Google patents (see the short sketch at the end of this review).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some content was missing (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focuses on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
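As promised above, here is a toy taste of what a Markov decision process involves -- my own two-state example, not course material. Value iteration repeatedly backs up the expected discounted reward of each state until the values settle, and the best action per state falls out at the end.
import numpy as np

# P[state][action] is a list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)],                  # in state 0, "stay" earns nothing
        1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},  # "go" usually reaches state 1 for a reward
    1: {0: [(1.0, 1, 1.0)],                  # in state 1, "stay" earns a small steady reward
        1: [(1.0, 0, 0.0)]},                 # "go" drops back to state 0
}
gamma = 0.9  # discount factor
V = np.zeros(2)

# Back up state values until they stop changing.
for _ in range(500):
    V_new = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s])
        for s in sorted(P)
    ])
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

# The greedy policy with respect to the converged values.
policy = [max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
          for s in sorted(P)]
print("state values:", V.round(2), "| best action per state:", policy)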
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
Video and audio quality were poor enough to make it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA classes and supplementary learning, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
from The Moz Blog http://tracking.feedpress.it/link/9375/9157255
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from University of Bristols
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
Gloss over potentially complex topics that may serve as noise.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Dockers on Windows (suggested package manager). This wasn't a huge deal, since I already had my AWS setup by this point; however, a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for at TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with ability to adjust speed, closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/ playgrounds, code lab exercises (run directly in your browser (no setup required!))
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews concepts in Coursera course very well, such that both pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain courses were more theory-based; all are interesting, yet impractical.
Due to limited funding the project is a bit slow to update and has less than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Panda through the application of the examples onto another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track offers has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge in the lesson to their workbook.
It wasn't a big deal, but when I started referencing files in the lesson, I had to dive into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied and not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second took a bit longer to work through).
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. This is the only course that I started to truly see the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure on has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glided over this. I felt the frustration that most people experience from installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine Learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Cover a vast variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started, and the limited instructions on setting up the environment, combined with many failed attempts, caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
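The course builds intuition with simple classifiers (Naive Bayes, SVMs, decision trees) on toy 2-D data. The sketch below is a minimal scikit-learn exercise in that spirit; the synthetic dataset and parameters are assumptions for illustration, not the course's own mini-project code.

```python
# A toy 2-D classification exercise of the kind the course uses to build visual
# intuition. Synthetic data; not the course's actual mini-projects.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (GaussianNB(), SVC(kernel="rbf", gamma=2.0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "test accuracy:", round(clf.score(X_test, y_test), 3))
```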
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. Also, you can submit assignments and earn a grade for free. If you want to earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Takes a classic machine learning education approach with a heavy focus on the math.
Quizzes interspersed within and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave rather than more modern languages and resources (the sketch after this review shows the same kind of gradient descent exercise in Python/NumPy instead).
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15-week program containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
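The course's programming exercises start with linear regression trained by gradient descent in Octave/MATLAB. Below is a minimal NumPy sketch of the same idea; the synthetic data, learning rate, and iteration count are assumptions for illustration, not the course's assignment values.

```python
# Batch gradient descent for linear regression, the Coursera course's opening
# algorithm, written in NumPy instead of Octave. Data and hyperparameters are
# illustrative, not the assignment's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(100, 1))            # one input feature
y = 4 + 3 * X[:, 0] + rng.normal(0, 0.5, 100)   # true relationship: y = 4 + 3x + noise

X_b = np.c_[np.ones(len(X)), X]   # prepend a bias column of ones
theta = np.zeros(2)               # parameters (Ng's theta)
alpha, iterations = 0.1, 1000     # learning rate and number of update steps

for _ in range(iterations):
    gradients = (2 / len(X_b)) * X_b.T @ (X_b @ theta - y)  # gradient of the MSE cost
    theta -= alpha * gradients

print(theta)  # should land close to [4, 3]
```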
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: the World Happiness dataset, encoded as follows (a matplotlib sketch of the same encoding follows this review):
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
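Orange is a drag-and-drop GUI, but the same three-way encoding can be reproduced in a few lines of matplotlib. The data below is a synthetic stand-in (the actual Kaggle World Happiness columns are roughly "Happiness Score," "Economy," and "Health"), so treat this purely as an illustration of the encoding.

```python
# The same x/y/color encoding in matplotlib: x = happiness, y = economy,
# color = health. Synthetic stand-in data, not the actual World Happiness file.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
happiness = rng.uniform(3, 8, 80)
economy = 0.25 * happiness + rng.normal(0, 0.3, 80)
health = 0.10 * happiness + rng.normal(0, 0.1, 80)

sc = plt.scatter(happiness, economy, c=health, cmap="viridis")
plt.xlabel("Happiness Score")
plt.ylabel("Economy (GDP per capita)")
plt.colorbar(sc, label="Health (life expectancy)")
plt.show()
```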
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the machine must learn to identify). A minimal Keras sketch of this kind of model follows this review.
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (ReLU, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session and not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
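For reference, here is a minimal tf.keras convolutional network for MNIST in the spirit of the models the talk builds up to. The architecture, optimizer, and epoch count are illustrative assumptions, not Martin Görner's exact slides, and accuracy as written will land near (not necessarily at) 99%.

```python
# A small convolutional network on MNIST with tf.keras. Architecture and
# training settings are illustrative, not the talk's exact configuration.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # scale pixels to [0, 1], add a channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```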
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a gifted storyteller who uses helpful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students, "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to hand out the final ahead of time, students would simply memorize the answers (i.e., they would overfit to the data) rather than genuinely learn the material; the final exists to measure how much students actually learned. (The short scikit-learn sketch after this review shows the same train-versus-test gap in code.)
The last half hour of each class is a Q&A, where students can ask questions. Their questions were useful for understanding the topics in more depth.
The video and audio quality were strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The lecture on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, it was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls the course together conceptually. The map of the course, below, was particularly useful for organizing the information taught in the lectures.
Image source: http://work.caltech.edu/slides/slides18.pdf
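To make the practice-exam analogy concrete, here is a tiny scikit-learn sketch showing a model that aces its training data but scores noticeably lower on held-out data. The dataset and model are assumptions for illustration, not examples from Dr. Abu-Mostafa's course.

```python
# The practice-exam/final-exam point in code: an unpruned decision tree can
# memorize its training data yet do worse on data it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0)  # no depth limit: free to memorize
tree.fit(X_train, y_train)

print("Train accuracy:", tree.score(X_train, y_train))  # ~1.00 (the practice exam)
print("Test accuracy: ", tree.score(X_test, y_test))    # noticeably lower (the final)
```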
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and too many prerequisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the historical asides, where Bishop talks about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Adds fun anecdotes that make it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's roughly 300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning (a minimal sketch in the spirit of those chapters follows this review).
Contains subtle jokes that add a bit of fun.
The tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
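For flavor, here is a minimal classifier in the spirit of the book's opening chapters. As I recall, the book's first example builds a k-nearest-neighbors model on the iris dataset; the snippet below is my own sketch of that idea, not a transcription of the book's code.

```python
# A k-nearest-neighbors classifier on the iris dataset, in the spirit of the
# book's opening chapters (a sketch from memory, not the book's own code).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))
```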
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking the course here will not earn credit towards the OMS degree, it still follows a non-watered-down college teaching philosophy.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov Decision Processes, which didn't really come up in many other introductory machine learning courses but are referenced within Google patents (a toy value-iteration sketch follows this review).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some videos were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
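Since Markov Decision Processes may be new if the other resources skipped them, here is a toy value-iteration sketch in NumPy. The two-state MDP below is entirely made up for illustration; it is not an example from the Georgia Tech course.

```python
# Value iteration on a made-up two-state Markov Decision Process.
# P[s, a, s'] = transition probability; R[s, a] = expected immediate reward.
import numpy as np

P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0, actions 0/1
              [[0.1, 0.9], [0.8, 0.2]]])  # transitions from state 1, actions 0/1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)  # state values, improved iteratively
for _ in range(200):
    Q = R + gamma * (P @ V)   # Q[s, a] = R[s, a] + gamma * sum over s' of P[s, a, s'] * V[s']
    V = Q.max(axis=1)         # act greedily with respect to Q

print("Optimal state values:", V)
print("Optimal policy (best action per state):", Q.argmax(axis=1))
```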
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA classes and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on machine learning and was caught off guard, since I knew nothing about the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure on has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glided over this. I felt the frustration that most people experience from installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine Learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Cover a vast variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This is the first course started and the limited instructions on setting up the environment and many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus in building mathematical intuition behind machine learning models. Also, one can submit assignments and earn a grade for free. If you want to earn a certificate, one can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible, the option to switch into a different section is always available.
Disliked:
The mathematic notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: World Happiness Dataset by:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learnings and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 IO conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset containing a bunch of handwritten numbers, which the machine must learn to identify the numbers).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference solution and not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematic intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur, includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, being released in 2012, it could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls together the course overall conceptually. The map of the course, below, was particularly useful to organizing the information taught in the courses.
Image source: http://work.caltech.edu/slides/slides18.pdf
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. It was too much math and pre-requisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematically text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some pretty incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov decision processes, a topic that didn't really come up in many other introductory machine learning courses but is referenced within Google patents (a tiny value-iteration sketch follows this review).
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some content was missing (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
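Since Markov decision processes came up in this course but not in most of the other resources, here is a tiny value-iteration sketch for a made-up two-state MDP. The states, actions, rewards, and discount factor are invented purely for illustration and are not taken from the course materials:

# Value iteration on a tiny, made-up Markov decision process.
# States, actions, transitions, and rewards are illustrative only.
GAMMA = 0.9  # discount factor

# transitions[state][action] -> list of (probability, next_state, reward) tuples
transitions = {
    "cool": {"slow": [(1.0, "cool", 1)],
             "fast": [(0.5, "cool", 2), (0.5, "hot", 2)]},
    "hot":  {"slow": [(1.0, "cool", 1)],
             "fast": [(1.0, "hot", -10)]},
}

# Start all state values at zero and repeatedly apply the Bellman optimality backup.
values = {s: 0.0 for s in transitions}
for _ in range(100):
    values = {
        s: max(
            sum(p * (r + GAMMA * values[nxt]) for p, nxt, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

print(values)  # the "cool" state ends up slightly more valuable than the "hot" one

The point is only to show the flavor: each sweep backs up expected discounted reward through the transition model until the state values stop changing.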
Andrew Ng's Stanford Machine Learning course on iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
Video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course (for a taste of that math in code, see the gradient descent sketch after this review).
Skip the first half of the first lecture, since it's mostly class logistics.
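For a feel of the kind of algorithm these math-heavy lectures derive on the blackboard, here is a minimal sketch of batch gradient descent for one-variable linear regression. The synthetic data, learning rate, and iteration count are my own illustrative choices rather than anything taken from the course:

# Batch gradient descent for simple linear regression, y ~ w*x + b.
# Synthetic data and hyperparameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
# Synthetic targets with "true" slope 3 and intercept 4, plus a little noise.
y = 3.0 * x + 4.0 + rng.normal(scale=1.0, size=100)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate

for _ in range(2000):
    error = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print("learned w ~ %.2f, b ~ %.2f" % (w, b))  # should land near 3 and 4

Computing and applying the loss gradients by hand like this is roughly what the lectures formalize with full derivations.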
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving, their current capabilities, and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
Machine Learning Resource -- Time (hours), Cost ($), Year
Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to -- 2 hours, $0, '17
{ML} Recipes with Josh Gordon Playlist -- 2 hours, $0, '16
Machine Learning Crash Course -- 15 hours, $0, '18
OCDevel Machine Learning Guide Podcast -- 30 hours, $0, '17-
Kaggle's Machine Learning Track (part 1) -- 6 hours, $0, '17
Fast.ai (part 1) -- 70 hours, $70*, '16
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems -- 20 hours, $25, '17
Udacity's Intro to Machine Learning (Kate/Sebastian) -- 60 hours, $0, '15
Andrew Ng's Coursera Machine Learning -- 55 hours, $0, '11
iPullRank Machine Learning Guide -- 3 hours, $0, '17
Review Google PhD -- 2 hours, $0, '17
Caltech Machine Learning on iTunes -- 27 hours, $0, '12
Pattern Recognition & Machine Learning by Christopher Bishop -- 150 hours, $75, '06 (N/A)
Machine Learning: Hands-on for Developers and Technical Professionals -- 15 hours, $50, '15
Introduction to Machine Learning with Python: A Guide for Data Scientists -- 15 hours, $25, '16
Udacity's Machine Learning by Georgia Tech -- 96 hours, $0, '15
Machine Learning Stanford iTunes by Andrew Ng -- 25 hours, $0, '08 (N/A)
(The original chart also rated each resource on Credibility, Code, Math, and Enjoyability.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners should understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor who can help you work through the AWS setup, definitely lean on them for support during installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visually or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS instances running, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; trying Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below is a high-level summary of my reviews of all the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process by introducing core concepts.
Breaks content into beginner/need-to-know material (green) and intermediate material that is mostly noise for individuals just starting out (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
Machine Learning Recipes with Josh Gordon ↓
Need to Know: This YouTube-hosted mini-series covers the very fundamentals of machine learning, with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
Makes no assumption of any prior knowledge.
Glosses over potentially complex topics that would only serve as noise for beginners.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker working on Windows (the suggested setup tool). This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: every time I tried to install it (over the course of two weeks), the .exe file would recursively start and keep spinning until my memory ran out, my computer crashed, or I shut the computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A crash course on machine learning built by Google researchers that is interactive and offers its own built-in coding environment!
Loved:
Different formats of learning: high-quality video (with adjustable speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises that run directly in your browser (no setup required!).
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Has published in numerous journals, taught classes at Washington University, and contributed to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
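To make that "graphical interpretation" quote concrete, here is a minimal sketch of the kind of exercise the Crash Course playgrounds revolve around: fitting a line by gradient descent and watching the loss respond to the learning rate. This is my own illustration rather than code from the course, and the numbers (true slope 3, intercept 4, learning rate 0.01) are arbitrary choices for the demo.

```python
# A minimal gradient-descent sketch (not code from the Crash Course): fit
# y = w*x + b to noisy data and watch the mean squared error shrink.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 4.0 + rng.normal(0, 1, size=100)  # noisy line with true w=3, b=4

w, b = 0.0, 0.0
learning_rate = 0.01  # the knob the course's playgrounds let you tweak visually

for step in range(1000):
    error = (w * x + b) - y
    loss = np.mean(error ** 2)          # mean squared error
    grad_w = 2 * np.mean(error * x)     # gradient of the loss w.r.t. w
    grad_b = 2 * np.mean(error)         # gradient of the loss w.r.t. b
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b
    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss:8.3f}  w {w:5.2f}  b {b:5.2f}")
```

Nudging learning_rate up or down and re-running is essentially what the course's interactive playgrounds let you do without writing any code.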
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, so the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes are more theory-based; they're all interesting, yet not immediately practical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript, with 10 years of web and mobile development
Creator of HabitRPG, an app that turns habit-building into an RPG
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas by having you apply the examples to another dataset.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included, which removed any and all setup/installation issues.
Side note: it's a bit different from Jupyter notebook (e.g., you have to click into a cell to add another cell).
Each step is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that you're expected to apply the lesson's knowledge to your own workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I had to dig through my workbook's files to find they didn't exist, only to realize that the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1,353 teams) in the $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply them to your own dataset as you go (the sketch below shows the basic pattern).
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
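For reference, here is a rough sketch of the pattern that first lesson teaches: load data with Pandas, pick a few feature columns, fit a scikit-learn model, and sanity-check its predictions. The file name and column names below (train.csv, LotArea, YearBuilt, BedroomAbvGr, SalePrice) are hypothetical stand-ins, so swap in whatever dataset you're applying the lesson to.

```python
# A minimal sketch of the Kaggle lesson-1 pattern (hypothetical file and columns).
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

data = pd.read_csv("train.csv")                      # hypothetical path to your dataset
features = ["LotArea", "YearBuilt", "BedroomAbvGr"]  # hypothetical feature columns
X = data[features]
y = data["SalePrice"]                                # hypothetical target column

model = DecisionTreeRegressor(random_state=1)
model.fit(X, y)

print(model.predict(X.head()))   # predictions for the first few rows
print(y.head().values)           # actual values, for comparison
```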
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML, and the source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding; it is the only course where I truly started to see the practical mechanics come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
The course overview spends time on their approach to learning (obviously I'm a fan!); if you're already drinking the Kool-Aid, skip past it.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend ~70 hours on this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
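For a taste of the kind of problem the course opens with, here is a minimal transfer-learning sketch in Keras. It is not the course's notebook (Fast.ai provides its own, far more complete ones); it only illustrates the underlying idea of reusing a pretrained image model and training a small new head for a two-class problem, and the data/train/<class_name>/ directory layout is a hypothetical assumption.

```python
# A minimal transfer-learning sketch (not the Fast.ai notebook): freeze a
# pretrained VGG16 and train only a small new classification head.
import tensorflow as tf

# Hypothetical layout: data/train/<class_name>/*.jpg for two classes.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)     # VGG-style input scaling
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # new head: binary classifier
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```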
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
The book opens with an amazing introduction to machine learning that briskly provides an overarching view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes the reader to apply that solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introduction puts everything into context; this general-to-specifics progression made the material much easier to absorb.
Disliked:
Installation was a common source of issues at the beginning of my journey, and the text glides over it. The frustration most people experience with installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
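To give a sense of the chapter 2 workflow, here is a compressed sketch in its spirit rather than the book's exact code (the book builds a much more careful pipeline): split off a test set, fit a model on housing data, and check the error.

```python
# A minimal end-to-end sketch in the spirit of the book's chapter 2 (not its exact code).
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

housing = fetch_california_housing()  # downloads the dataset on first use
X_train, X_test, y_train, y_test = train_test_split(
    housing.data, housing.target, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"Test RMSE: {rmse:.3f}")  # target is median house value, in units of $100,000
```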
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focuses on developing a visual intuition for what each model is trying to accomplish.
This visual approach to the underlying mathematics is very useful.
Covers a wide variety and breadth of models and machine learning basics.
In terms of presenting the concepts, there is a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively points to documentation and suggests where viewers can learn more or find reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started, and the limited environment-setup instructions plus my many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, there is little acknowledgement of what that code is actually doing, which made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. You can submit assignments and earn a grade for free; if you want a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Takes a very classic approach to machine learning education, with a strong focus on the math.
Quizzes interspersed within and after each lesson support understanding and overall learning.
The course sessions are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched (August 2017) a new offering called Deeplearning.ai, a ~15-week specialization containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined about setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including MonkeyLearn and Orange).
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual, user-friendly interface and the potential to tell some pretty compelling stories.
Example: the World Happiness dataset, plotted with (a code sketch of this chart follows the tips below):
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
The content is very dense and could be broken up more with relevant imagery.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6, and read the rest depending on personal interest.
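And here, as promised above, is a rough code equivalent of that Orange scatter plot. It assumes a hypothetical world_happiness.csv with "Happiness Score", "Economy", and "Health" columns, mirroring the World Happiness Report data.

```python
# A minimal matplotlib sketch of the Orange-style scatter described above
# (hypothetical CSV path and column names).
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("world_happiness.csv")      # hypothetical path
points = plt.scatter(df["Happiness Score"],  # x-axis: Happiness Score
                     df["Economy"],          # y-axis: Economy
                     c=df["Health"],         # color: Health
                     cmap="viridis")
plt.xlabel("Happiness Score")
plt.ylabel("Economy")
plt.colorbar(points, label="Health")
plt.show()
```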
Google I/O 2017: TensorFlow and Deep Learning without a PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g., ReLU, CNNs, RNNs).
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; the 2 hours of screen time doesn't account for all the Googling and processing time you'll need.
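For context, here is a compact Keras sketch of the kind of small convolutional network the talk builds up to on MNIST. It is my own approximation, not the code from Görner's slides, and hitting the talk's 99% figure depends on the exact architecture and training time.

```python
# A small convolutional network on MNIST (an approximation, not Görner's slide code).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # shape (60000, 28, 28, 1), scaled to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.25),  # dropout is one of the tricks the talk covers
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```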
Caltech's Machine Learning Course on iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies, which make the content more relatable.
As an example, he asks students, "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just hand students the final, they would simply memorize the answers (i.e., they would overfit to the data) rather than genuinely learn the material; the final exists to measure how much students actually learned (a minimal train/test sketch appears at the end of this review).
The last half-hour of each class is always a Q&A, where students can ask questions. Their questions were useful for understanding the topics in more depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, the course was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it conceptually pulls the whole course together. The map of the course, below, was particularly useful for organizing the information taught across the lessons.
Image source: http://work.caltech.edu/slides/slides18.pdf
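Here is the minimal train/test sketch promised in the review above: hold out data the model never sees during training, so that its score reflects genuine learning rather than memorization. The dataset and model are arbitrary choices for illustration.

```python
# Practice exam vs. final exam: the held-out test score is the honest measure.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)  # an unpruned tree can memorize its training data
model.fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))  # typically ~1.0 ("memorized the answers")
print("Test accuracy:    ", model.score(X_test, y_test))    # the "final exam" score, noticeably lower
```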
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up; there was too much math and too many prerequisites for me to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Adds fun anecdotes that make it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contains subtle jokes that add a bit of fun.
The tip to use the Anaconda Python distribution with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking the course here won't earn credit towards the OMS degree, it still reflects a non-watered-down college teaching philosophy.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including VC dimension, Bayesian learning, Occam's razor, etc.).
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses, but which are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some videos were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
Andrew Ng's Stanford Machine Learning Course on iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!